
    Improved Distance Queries and Cycle Counting by Frobenius Normal Form

    Consider an unweighted, directed graph G with diameter D. In this paper, we introduce a framework for counting cycles and walks of a given length in matrix multiplication time Õ(n^ω). The framework is based on the fast decomposition into Frobenius normal form and Hankel matrix-vector multiplication. It allows us to solve the following problems efficiently.
    * All Nodes Shortest Cycles: for every node, return the length of the shortest cycle containing it. We give an Õ(n^ω) algorithm that improves upon the previous Õ(n^((ω+3)/2)) algorithm for unweighted digraphs.
    * We show how to compute all D sets of vertices lying on cycles of length c ∈ {1, ..., D} in randomized Õ(n^ω) time. This improves upon the algorithm of Cygan et al., which computes only a single such set.
    * We present a functional improvement of distance queries for directed, unweighted graphs.
    * All Pairs All Walks: we give an almost optimal Õ(n^3)-time algorithm for the all-walks counting problem, improving upon the naive O(D n^ω)-time algorithm (sketched below).
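
    As a point of reference, the following is a minimal sketch of the naive O(D n^ω) baseline mentioned in the last bullet: the diagonal of A^k counts closed walks of length k at each node, so D successive matrix products recover all the counts. This is only our illustration of the baseline, not the paper's Frobenius-normal-form framework; all names are ours.

    ```python
    import numpy as np

    def closed_walk_counts(A, D):
        """Naive baseline: for k = 1..D, read the number of closed walks of
        length k at every node off the diagonal of A^k.  Costs D matrix
        products, i.e. O(D n^omega) arithmetic operations.  (int64 entries
        may overflow for large D; switch to dtype=object for exact counts.)"""
        A = np.asarray(A, dtype=np.int64)
        P = np.eye(len(A), dtype=np.int64)
        counts = []
        for _ in range(D):
            P = P @ A
            counts.append(np.diagonal(P).copy())
        return counts  # counts[k-1][v] = number of closed walks of length k at v

    # Directed 3-cycle 0 -> 1 -> 2 -> 0: each node lies on one closed walk of length 3.
    A = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
    print(closed_walk_counts(A, 3)[2])  # [1 1 1]
    ```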

    Equal-Subset-Sum Faster Than the Meet-in-the-Middle

    In the Equal-Subset-Sum problem, we are given a set S of n integers, and the task is to decide whether there exist two disjoint nonempty subsets A, B ⊆ S whose elements sum up to the same value. The problem is NP-complete. The state-of-the-art algorithm runs in O*(3^(n/2)) ≤ O*(1.7321^n) time and is based on the meet-in-the-middle technique (sketched below). In this paper, we improve upon this algorithm and give an O*(1.7088^n) worst-case Monte Carlo algorithm. This answers an open question raised by Woeginger in his inspirational survey. Additionally, we analyse polynomial-space algorithms for Equal-Subset-Sum. A naive polynomial-space algorithm runs in O*(3^n) time. With read-only access to exponentially many random bits, we give a randomized algorithm running in O*(2.6817^n) time and polynomial space.
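
    For orientation, here is a minimal sketch of the O*(3^(n/2)) meet-in-the-middle baseline described above (this is the algorithm the paper improves upon, not its new O*(1.7088^n) method; all names are ours):

    ```python
    from itertools import product

    def equal_subset_sum_mitm(S):
        """O*(3^(n/2)) meet-in-the-middle for Equal-Subset-Sum.

        Each element is assigned +1 (goes to A), -1 (goes to B) or 0 (unused);
        a solution is an assignment with signed sum 0 and both A, B nonempty.
        Split S in half, tabulate all 3^(n/2) partial signed sums per half,
        and look for a left/right pair of partial sums that cancel.
        """
        n = len(S)
        halves = (S[:n // 2], S[n // 2:])

        def table(half):
            # partial signed sum -> achievable (A nonempty?, B nonempty?) flags
            t = {}
            for assign in product((-1, 0, 1), repeat=len(half)):
                d = sum(s * x for s, x in zip(assign, half))
                t.setdefault(d, set()).add((1 in assign, -1 in assign))
            return t

        left, right = (table(h) for h in halves)
        for d, lflags in left.items():
            for rf in right.get(-d, ()):
                for lf in lflags:
                    if (lf[0] or rf[0]) and (lf[1] or rf[1]):
                        return True
        return False

    assert equal_subset_sum_mitm([1, 2, 3])       # A = {3}, B = {1, 2}
    assert not equal_subset_sum_mitm([1, 2, 4])   # no two disjoint equal-sum subsets
    ```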

    On Problems Equivalent to (min,+)-Convolution

    In recent years, significant progress has been made in explaining the apparent hardness of improving over naive solutions for many fundamental polynomially solvable problems. This progress has come in the form of conditional lower bounds: reductions from problems assumed to be hard, such as 3SUM, All-Pairs Shortest Paths, SAT, and Orthogonal Vectors. In the (min,+)-convolution problem, the goal is to compute a sequence c, where c[k] = min_i (a[i] + b[k-i]), given sequences a and b. This can easily be done in O(n^2) time (see the sketch below), but no O(n^(2-ε)) algorithm is known for any ε > 0. In this paper, we undertake a systematic study of the (min,+)-convolution problem as a hardness assumption. As a first step, we establish the equivalence of this problem to a group of other problems, including variants of the classic knapsack problem and problems related to subadditive sequences. (min,+)-convolution has also been used as a building block in algorithms for many problems, notably in stringology, and has already appeared as an ad hoc hardness assumption. We investigate some of these connections and provide new reductions and other results.
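
    For orientation, the easy O(n^2) algorithm mentioned above is just a double loop; a minimal sketch with illustrative names:

    ```python
    def min_plus_convolution(a, b):
        """Naive O(n^2) (min,+)-convolution: c[k] = min over i of a[i] + b[k-i]."""
        n = len(a)
        assert len(b) == n
        c = [float("inf")] * (2 * n - 1)
        for i in range(n):
            for j in range(n):
                c[i + j] = min(c[i + j], a[i] + b[j])
        return c

    print(min_plus_convolution([0, 2, 5], [1, 1, 4]))  # [1, 1, 3, 6, 9]
    ```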

    Improving Schroeppel and Shamir's Algorithm for Subset Sum via Orthogonal Vectors

    We present an O*(2^(0.5n))-time and O*(2^(0.249999n))-space randomized algorithm for solving worst-case Subset Sum instances with n integers. This is the first improvement over the long-standing O*(2^(n/2))-time and O*(2^(n/4))-space algorithm due to Schroeppel and Shamir (FOCS 1979). We bridge this gap in two steps: (1) We present a space-efficient reduction to the Orthogonal Vectors problem (OV), one of the most central problems in fine-grained complexity. The reduction is established via an intricate combination of the method of Schroeppel and Shamir and the representation technique introduced by Howgrave-Graham and Joux (EUROCRYPT 2010) for designing Subset Sum algorithms in the average-case regime. (2) We provide an algorithm for OV that detects an orthogonal pair among N given vectors in {0,1}^d with support size d/4 in time Õ(N · 2^d / (d choose d/4)). Our algorithm for OV is based on and refines the representative-families framework developed by Fomin, Lokshtanov, Panolan and Saurabh (J. ACM 2016). Our reduction uncovers a curiously tight relation between Subset Sum and OV: any improvement of our algorithm for OV would imply an improvement over the runtime of Schroeppel and Shamir, which is also a long-standing open problem.
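
    For reference, the decision version of OV is easy to state via its brute-force O(N^2 · d) baseline; this is only our illustration of the problem statement, and the paper's Õ(N · 2^d / (d choose d/4)) algorithm for the support-size-d/4 regime is the nontrivial contribution:

    ```python
    def has_orthogonal_pair(vectors):
        """Brute-force Orthogonal Vectors: do two of the given 0/1 vectors
        have disjoint supports (inner product 0)?  O(N^2 * d) time."""
        for i in range(len(vectors)):
            for j in range(i + 1, len(vectors)):
                if not any(x and y for x, y in zip(vectors[i], vectors[j])):
                    return True
        return False

    print(has_orthogonal_pair([(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1)]))  # True
    ```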

    A Faster Exponential Time Algorithm for Bin Packing With a Constant Number of Bins via Additive Combinatorics

    In the Bin Packing problem, one is given n items with weights w_1, ..., w_n and m bins with capacities c_1, ..., c_m. The goal is to find a partition of the items into sets S_1, ..., S_m such that w(S_j) ≤ c_j for every bin j, where w(X) denotes Σ_{i∈X} w_i. Björklund, Husfeldt and Koivisto (SICOMP 2009) presented an O*(2^n)-time algorithm for Bin Packing. In this paper, we show that for every m ∈ ℕ there exists a constant σ_m > 0 such that an instance of Bin Packing with m bins can be solved in randomized O*((2 − σ_m)^n) time. Before our work, such improved algorithms were not known even for m = 4. A key step in our approach is the following new result in Littlewood-Offord theory on the additive combinatorics of subset sums: for every δ > 0 there exists an ε > 0 such that if |{X ⊆ {1, ..., n} : w(X) = v}| ≥ 2^((1−ε)n) for some v, then |{w(X) : X ⊆ {1, ..., n}}| ≤ 2^(δn).
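
    The set {w(X) : X ⊆ {1, ..., n}} whose size the statement above bounds can be enumerated by a standard textbook dynamic program; a minimal sketch (ours, unrelated to the paper's algorithm):

    ```python
    def subset_sums(weights):
        """Enumerate {w(X) : X subseteq weights}, the set whose size the
        Littlewood-Offord-style statement above bounds.  O(n * |sums|) time."""
        sums = {0}
        for w in weights:
            sums |= {s + w for s in sums}
        return sums

    # Heavily repeated weights (many subsets sharing a sum) give few distinct sums:
    print(len(subset_sums([1] * 10)))          # 11
    print(len(subset_sums([1, 2, 4, 8, 16])))  # 32
    ```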

    Hamiltonian Cycle Parameterized by Treedepth in Single Exponential Time and Polynomial Space

    For many algorithmic problems on graphs of treewidth t, a standard dynamic programming approach gives algorithms with time and space complexity 2^O(t) · n^O(1). It turns out that when one considers the more restrictive parameter treedepth, it is often the case that a variation of this technique can be used to reduce the space complexity to polynomial, while retaining time complexity of the form 2^O(d) · n^O(1), where d is the treedepth. This transfer of methodology is, however, far from automatic. For instance, for problems with connectivity constraints, standard dynamic programming techniques give algorithms with time and space complexity 2^O(t log t) · n^O(1) on graphs of treewidth t, but it is not clear how to convert them into time-efficient polynomial-space algorithms for graphs of low treedepth. Cygan et al. [ACM Trans. Algorithms, 18 (2022), 17] introduced the Cut&Count technique and showed that a certain class of problems with connectivity constraints can be solved in time and space complexity 2^O(t) · n^O(1). Recently, Hegerfeld and Kratsch (STACS 2020) showed that, for some of those problems, the Cut&Count technique can also be applied in the setting of treedepth, yielding algorithms with running time 2^O(d) · n^O(1) and polynomial space usage. However, several important problems eluded such a treatment, with the most prominent examples being Hamiltonian Cycle and Longest Path. In this paper, we clarify the situation by showing that Hamiltonian Cycle, Hamiltonian Path, Long Cycle, Long Path, and Min Cycle Cover all admit 5^d · n^O(1)-time and polynomial-space algorithms on graphs of treedepth d. The algorithms are randomized Monte Carlo with only false negatives.
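
    Since treedepth is less familiar than treewidth, here is a definition-level computation of the parameter d, usable only on tiny graphs (our illustrative sketch, not part of the paper's algorithms):

    ```python
    def components(vertices, adj):
        """Connected components of the subgraph induced on `vertices`."""
        seen, comps = set(), []
        for s in vertices:
            if s in seen:
                continue
            stack, comp = [s], set()
            while stack:
                u = stack.pop()
                if u in comp:
                    continue
                comp.add(u)
                stack.extend(w for w in adj[u] if w in vertices and w not in comp)
            seen |= comp
            comps.append(comp)
        return comps

    def treedepth(vertices, adj):
        """Treedepth straight from the definition (exponential; tiny graphs only):
        td(G) = max over components C of (1 if |C| = 1 else 1 + min_v td(C - v))."""
        best = 0
        for comp in components(vertices, adj):
            if len(comp) == 1:
                best = max(best, 1)
            else:
                best = max(best, 1 + min(treedepth(comp - {v}, adj) for v in comp))
        return best

    # A path on 4 vertices has treedepth 3 (= ceil(log2(4 + 1))).
    path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    assert treedepth(set(path), path) == 3
    ```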

    Coverability in VASS Revisited: Improving Rackoff's Bound to Obtain Conditional Optimality

    Seminal results establish that the coverability problem for Vector Addition Systems with States (VASS) is in EXPSPACE (Rackoff, '78) and is EXPSPACE-hard already under unary encodings (Lipton, '76). More precisely, Rosier and Yen later utilise Rackoff's bounding technique to show that if coverability holds then there is a run of length at most n^(2^(O(d log d))), where d is the dimension and n is the size of the given unary VASS. Earlier, Lipton showed that there exist instances of coverability in d-dimensional unary VASS that are only witnessed by runs of length at least n^(2^(Ω(d))). Our first result closes this gap. We improve the upper bound by removing the twice-exponentiated log(d) factor, thus matching Lipton's lower bound. This closes the corresponding gap for the exact space required to decide coverability. It also yields a deterministic n^(2^(O(d)))-time algorithm for coverability. Our second result is a matching lower bound: there is no deterministic n^(2^(o(d)))-time algorithm, conditioned upon the Exponential Time Hypothesis. When analysing coverability, a standard proof technique is to consider VASS with bounded counters. Bounded VASS make for an interesting and popular model due to strong connections with timed automata. Accordingly, we study a natural setting where the counter bound is linear in the size of the VASS. Here the trivial exhaustive search algorithm runs in O(n^(d+1)) time (a sketch follows below). We give evidence that this is near-optimal: we prove that in dimension one the trivial algorithm is conditionally optimal, by showing that n^(2−o(1)) time is required under the k-cycle hypothesis, and that in general fixed dimension d, n^(d−2−o(1)) time is required under the 3-uniform hyperclique hypothesis.
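
    A minimal sketch of the trivial exhaustive search for bounded VASS mentioned above, assuming a toy encoding of transitions as (state, update vector, state) triples (all names are ours):

    ```python
    from collections import deque

    def coverable(transitions, bound, start, target):
        """Exhaustive search for coverability in a bounded VASS.

        Configurations are pairs (control state, counter vector) with every
        counter kept in [0, bound], so BFS visits at most |Q| * (bound + 1)^d
        configurations -- the O(n^(d+1))-style baseline when the bound is
        linear in the size of the VASS.  Asks: is a configuration (q, v) with
        q == target state and v >= target vector reachable from start?
        """
        (q0, v0), (qt, vt) = start, target
        seen = {(q0, tuple(v0))}
        queue = deque(seen)
        while queue:
            q, v = queue.popleft()
            if q == qt and all(a >= b for a, b in zip(v, vt)):
                return True
            for (p, delta, p_next) in transitions:
                if p != q:
                    continue
                w = tuple(a + d for a, d in zip(v, delta))
                if all(0 <= x <= bound for x in w) and (p_next, w) not in seen:
                    seen.add((p_next, w))
                    queue.append((p_next, w))
        return False

    # 1-dimensional example: a self-loop that increments the counter.
    ts = [("q", (1,), "q")]
    print(coverable(ts, bound=3, start=("q", (0,)), target=("q", (2,))))  # True
    ```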